
    Detecting Ontological Conflicts in Protocols between Semantic Web Services

    The task of verifying the compatibility of interacting web services has traditionally been limited to checking the compatibility of the interaction protocol in terms of message sequences and the types of data being exchanged. Since web services are developed largely in an uncoordinated way, different services often use independently developed ontologies for the same domain instead of adhering to a single standard ontology. In this work we investigate how a server, given a published client ontology, can verify whether a state with semantically inconsistent results may be reached during the execution of a protocol with that client. In practice, a database is often used to store the actual data alongside the ontologies, rather than storing the data as part of the ontology description itself. It is important to observe that even when the server's verification indicates that a conflict state is reachable, the conflict may not actually arise from the current state of the database. A relational-algebra-based decision procedure is therefore developed to incorporate the current state of the client and server databases into the overall verification procedure.
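
    The interplay between the protocol-level verdict and the current database contents can be illustrated with a small sketch. The toy check below models relations as sets of tuples and uses selection and natural join to test whether any tuple in the joined current state is classified under different concepts by the server and client ontologies. It is an illustration of the idea only, not the paper's decision procedure; all relation contents, attribute names, and classification functions are hypothetical.

        # Toy relational-algebra check: can the *current* database state actually
        # produce a semantic conflict flagged by the protocol-level analysis?
        # Relations are sets of frozenset(tuple.items()); names are hypothetical.

        def select(relation, predicate):
            """Relational selection: keep tuples (as dicts) satisfying the predicate."""
            return {frozenset(t.items()) for t in map(dict, relation) if predicate(t)}

        def natural_join(r, s):
            """Natural join on the attributes the two relations share."""
            out = set()
            for t1 in map(dict, r):
                for t2 in map(dict, s):
                    shared = set(t1) & set(t2)
                    if all(t1[a] == t2[a] for a in shared):
                        out.add(frozenset({**t1, **t2}.items()))
            return out

        def conflict_reachable(server_db, client_db, server_class, client_class):
            """True if some tuple in the joined current state is classified under
            incompatible concepts by the server and client ontologies."""
            joined = natural_join(server_db, client_db)
            conflicts = select(joined, lambda t: server_class(t) != client_class(t))
            return bool(conflicts)

        # Hypothetical current database contents and concept-classification functions.
        server_db = {frozenset({"id": 1, "drug": "aspirin"}.items())}
        client_db = {frozenset({"id": 1, "category": "painkiller"}.items())}
        print(conflict_reachable(server_db, client_db,
                                 lambda t: "medication",      # server ontology concept
                                 lambda t: t["category"]))    # client ontology concept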

    A Multi-objective Perspective for Operator Scheduling using Fine-grained DVS Architecture

    The stringent power budget of fine-grained power-managed digital integrated circuits has driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. This emerging scenario motivates us to revisit the classical operator scheduling problem given the availability of DVFS-enabled functional units that can trade off cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm that explores this state space and reports the Pareto-optimal front with respect to area and power. The scheduler also aims at maximum resource sharing and attains substantial area and power gains for complex benchmarks when timing constraints are relaxed by a sufficient amount. Experimental results show that the algorithm solves the problem for most available benchmarks without any user constraint (area/power), and that imposing a power or area budget leads to significant performance gains. Comment: 18 pages, 6 figures, International Journal of VLSI Design & Communication Systems (VLSICS).
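
    As a rough illustration of this style of exploration (not the paper's algorithm), the sketch below assigns a hypothetical DVFS mode to each operation, prunes branches that miss a timing deadline or are dominated by an already found solution, and returns the surviving (area, power) Pareto points. The serial timing model, the per-operation area cost, and the mode table are invented simplifications.

        # Toy branch-and-bound over DVFS mode assignments; collects an (area, power)
        # Pareto front under a cycle deadline. All numbers are hypothetical.
        MODES = [            # (cycles, power, area) options for one operation
            (1, 9.0, 4.0),   # high voltage: fast, power hungry, large unit
            (2, 4.0, 3.0),   # medium voltage
            (3, 2.0, 2.5),   # low voltage: slow but frugal
        ]

        def dominated(point, front):
            area, power = point
            return any(a <= area and p <= power and (a, p) != point for a, p in front)

        def pareto_schedule(num_ops, deadline):
            front = set()

            def branch(i, cycles, power, area):
                if cycles > deadline:                 # timing bound: prune infeasible branch
                    return
                if dominated((area, power), front):   # dominance bound: costs only grow
                    return
                if i == num_ops:
                    front.add((area, power))
                    return
                for c, p, a in MODES:                 # try every DVFS mode for operation i
                    branch(i + 1, cycles + c, power + p, area + a)

            branch(0, 0, 0.0, 0.0)
            return sorted(pt for pt in front if not dominated(pt, front))

        print(pareto_schedule(num_ops=4, deadline=8))  # non-dominated (area, power) points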

    DietCNN: Multiplication-free Inference for Quantized CNNs

    The rising demand for networked embedded systems with machine intelligence has been a catalyst for sustained attempts by the research community to implement Convolutional Neural Network (CNN) based inference on embedded resource-limited devices. Redesigning a CNN to remove costly multiplication operations has already shown promising results in reducing inference energy usage. This paper proposes a new method for replacing the multiplications in a CNN by table look-ups. Unlike existing methods that completely modify the CNN operations, the proposed methodology preserves the semantics of the major CNN operations. Conforming to the existing mechanism of the CNN layer operations ensures that the reliability of a standard CNN is preserved. It is shown that the proposed multiplication-free CNN, based on a single activation codebook, can achieve 4.7x, 5.6x, and 3.5x reductions in energy per inference in FPGA implementations of MNIST-LeNet-5, CIFAR10-VGG-11, and Tiny ImageNet-ResNet-18, respectively. Our results show that the DietCNN approach significantly improves the resource consumption and latency of deep inference for smaller models, often used in embedded systems. Our code is available at: https://github.com/swadeykgp/DietCNN Comment: Supplementary for S. Dey, P. Dasgupta and P. P. Chakrabarti, "DietCNN: Multiplication-free Inference for Quantized CNNs," 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 2023, pp. 1-8, doi: 10.1109/IJCNN54540.2023.1019177
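
    The core look-up idea can be sketched as follows, assuming a single activation codebook. This is a simplified illustration of replacing multiplications with table look-ups, not the exact DietCNN pipeline; the codebook size, weights, and input patch are placeholders.

        # Simplified LUT-based dot product: activations are quantized to codebook
        # indices, all (weight, codebook entry) products are precomputed offline,
        # and inference then needs only gathers and additions.
        import numpy as np

        def build_lut(weights, codebook):
            """Precompute weight[i] * codebook[k] once, so inference avoids multiplies."""
            return np.outer(weights, codebook)       # shape: (num_weights, codebook_size)

        def quantize(activations, codebook):
            """Map each activation to the index of its nearest codebook entry."""
            return np.abs(activations[:, None] - codebook[None, :]).argmin(axis=1)

        def lut_dot(weight_lut, activation_idx):
            """Multiplication-free dot product: gather precomputed products and sum."""
            return weight_lut[np.arange(len(activation_idx)), activation_idx].sum()

        codebook = np.linspace(-1.0, 1.0, 16)        # single activation codebook (16 symbols)
        weights = np.random.randn(9)                 # e.g. one 3x3 filter, flattened
        patch = np.random.randn(9)                   # one input patch, flattened

        lut = build_lut(weights, codebook)
        approx = lut_dot(lut, quantize(patch, codebook))
        exact = float(weights @ patch)
        print(f"LUT-based: {approx:.3f}  exact: {exact:.3f}")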

    Complexity of compositional model checking of computation tree logic on simple structures

    Temporal logic model checking is one of the most potent tools for the verification of finite state systems. Computation Tree Logic (CTL) has gained popularity because, unlike most other logics, CTL model checking of a single transition system can be achieved in polynomial time. However, in most real-life problems, especially in distributed and parallel systems, the system consists of a set of concurrent processes, and the verification problem translates to model checking the composition of the component processes. Since explicit composition leads to state explosion, verifying the system without actually composing the components is attractive, even for a possibly restrictive class of systems. We show that the problem of compositional CTL model checking is PSPACE-complete for the class of systems composed of components that are tree-like transition structures and do not interact among themselves. For the simplest forms of existential and universal CTL formulas, model checking turns out to be NP-complete and coNP-complete, respectively. The results hold for both synchronous and asynchronous composition.
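
    The polynomial-time claim for a single transition system rests on the classic CTL labelling algorithm. The sketch below shows only the existential connectives EX and E[· U ·] over an explicit successor relation; the three-state system and its atomic-proposition labels are hypothetical.

        # Classic CTL labelling over one explicit transition system (no composition).
        def sat_ex(states, succ, sat_phi):
            """EX phi: states with at least one successor satisfying phi."""
            return {s for s in states if succ[s] & sat_phi}

        def sat_eu(states, succ, sat_phi, sat_psi):
            """E[phi U psi]: least fixpoint computed by backward propagation."""
            result = set(sat_psi)
            frontier = list(sat_psi)
            while frontier:
                t = frontier.pop()
                for s in states:                      # predecessors of t
                    if t in succ[s] and s in sat_phi and s not in result:
                        result.add(s)
                        frontier.append(s)
            return result

        # Tiny example: three states, p holds in s2, q holds in s0 and s1.
        states = {"s0", "s1", "s2"}
        succ = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}}
        sat_p, sat_q = {"s2"}, {"s0", "s1"}
        print(sat_ex(states, succ, sat_p))            # states satisfying EX p
        print(sat_eu(states, succ, sat_q, sat_p))     # states satisfying E[q U p]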

    Discovering the input assumptions in specification refinement coverage

    The design of a large chip is typically hierarchical: large modules are recursively expanded into a collection of sub-modules, and each expansion refines the design by adding level-specific details. We believe that a similar approach is necessary to scale the capacity of formal property verification technology: as the design is refined from one level to the next, the formal specification must also be refined to reflect the level-specific design decisions. At the heart of this approach we propose a checker that identifies the input assumptions under which the refined specification "covers" the original specification. This enables the validation engineer to focus the verification effort on the remaining input scenarios, thereby reducing the number of target coverage points for simulation.
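
    As a toy illustration of the coverage check (not the proposed checker itself), the sketch below treats the original and refined specifications as Boolean predicates over input and output bits, enumerates input vectors to find those on which the refined specification implies the original, and reports the rest as the remaining coverage points. The example specifications and signal names are hypothetical.

        # Enumerative coverage check: on which input vectors does (refined => original)
        # hold for every output consistent with the refined spec?
        from itertools import product

        def covering_assumption(num_inputs, num_outputs, original, refined):
            """Split input vectors into those covered by the refined spec and the rest."""
            assumption, remaining = [], []
            for i in product([0, 1], repeat=num_inputs):
                covered = all(original(i, o)
                              for o in product([0, 1], repeat=num_outputs)
                              if refined(i, o))
                (assumption if covered else remaining).append(i)
            return assumption, remaining

        # Hypothetical specs: original says "req -> grant";
        # refined says "grant == (req and enable)".  i = (req, enable), o = (grant,)
        original = lambda i, o: (not i[0]) or o[0]
        refined = lambda i, o: o[0] == (i[0] and i[1])

        assumption, remaining = covering_assumption(2, 1, original, refined)
        print("covered under inputs:", assumption)     # enable high, or req low
        print("remaining coverage points:", remaining)  # req=1, enable=0 still needs simulation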